Implementing plastic weights in neural networks using low precision arithmetic

Authors

  • Christopher Johansson
  • Anders Lansner
Abstract

In this letter, we develop a fixed-point arithmetic, low-precision implementation of an exponentially weighted moving average (EWMA) that is used in a neural network with plastic weights. We analyze the proposed design both analytically and experimentally, and we also evaluate its performance in the application of an attractor neural network. The EWMA in the proposed design has a constant relative truncation error, which is important for avoiding round-off errors in applications with slowly decaying processes, e.g. connectionist networks. We conclude that the proposed design offers greatly improved memory and computational efficiency compared to a naïve implementation of the EWMA's difference equation, and that it is well suited for implementation in digital hardware.
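The abstract contrasts the proposed design with a naïve fixed-point implementation of the EWMA's difference equation. As a rough illustration only (the paper's actual design is not reproduced here), the sketch below implements that naïve integer-arithmetic EWMA with a smoothing constant λ = 2^-K and demonstrates the truncation stall that motivates a constant relative truncation error; all names and parameter values are illustrative.

```python
# Minimal sketch (not the paper's design): a naive fixed-point EWMA
#   y[n] = y[n-1] + lambda * (x[n] - y[n-1]),  lambda = 2**-K,
# using only integer arithmetic. Values are stored with F fractional
# bits (Q-format fixed point). Names and constants are illustrative.

F = 8          # fractional bits: real value = stored_int / 2**F
K = 6          # smoothing constant: lambda = 2**-K

def to_fixed(x: float) -> int:
    """Quantize a real value to fixed point with F fractional bits."""
    return int(round(x * (1 << F)))

def ewma_step(y: int, x: int) -> int:
    """One EWMA update using only integer subtract, shift, and add.

    The right shift by K truncates, so whenever |x - y| < 2**K the
    correction term becomes 0 and the update is lost entirely -- the
    round-off problem that a constant relative truncation error avoids.
    """
    return y + ((x - y) >> K)

# Demo: a small constant input never moves the average off zero,
# because (26 - 0) >> 6 == 0 on every step.
y = to_fixed(0.0)
x = to_fixed(0.1)            # 0.1 in this format is the integer 26
for _ in range(1000):
    y = ewma_step(y, x)
print(y / (1 << F))          # prints 0.0: the naive design stalls
```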


Similar articles

Back Propagation with Integer Arithmetic

The present work investigates the significance of arithmetic precision in neural network simulation. Noting that a biological brain consists of a large number of cells of low precision, we try to answer the question: with a fixed amount of memory and CPU cycles available for simulation, does a larger net with less precision perform better than a smaller one with higher precision? We eva...


Minimum Energy Quantized Neural Networks

This work targets the automated minimum-energy optimization of Quantized Neural Networks (QNNs): networks using low-precision weights and activations. These networks are trained from scratch at an arbitrary fixed-point precision. At iso-accuracy, QNNs using fewer bits require deeper and wider network architectures than networks using higher-precision operators, while they require less complex ar...


Discontinuities in Recurrent Neural Networks

This article studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of a finite number of neurons; each neuron computes a polynomial net function and a sigmoid-like continuous activation function. We introduce arithmetic networks as ARNNs augmented with a few simple discontinuous ...
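For orientation, here is a minimal sketch of one ARNN update step in the sense described above: finitely many neurons, each applying a polynomial net function followed by a sigmoid-like continuous activation. The affine (degree-1 polynomial) net function and the saturated-linear sigmoid are assumptions chosen for concreteness in this literature, not details taken from this particular article.

```python
# Minimal ARNN step sketch. Assumptions (not from the article): the net
# function is affine and the activation is the saturated-linear sigmoid.

from typing import List

def sat_sigmoid(z: float) -> float:
    """Saturated-linear 'sigmoid': continuous and piecewise linear."""
    return min(1.0, max(0.0, z))

def arnn_step(x: List[float], u: List[float],
              A: List[List[float]], B: List[List[float]],
              c: List[float]) -> List[float]:
    """x_i(t+1) = sigma( sum_j A_ij x_j(t) + sum_k B_ik u_k(t) + c_i )."""
    n = len(x)
    return [
        sat_sigmoid(
            sum(A[i][j] * x[j] for j in range(n))
            + sum(B[i][k] * u[k] for k in range(len(u)))
            + c[i]
        )
        for i in range(n)
    ]

# Two neurons, one input line: one synchronous state update.
state = arnn_step([0.2, 0.8], [1.0],
                  A=[[0.5, -0.3], [0.1, 0.9]],
                  B=[[0.2], [0.0]],
                  c=[0.0, 0.05])
print(state)   # [0.06, 0.79]
```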


Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations

We introduce a method to train Quantized Neural Networks (QNNs): neural networks with extremely low-precision (e.g., 1-bit) weights and activations at run-time. At train-time, the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operati...
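To make the bit-wise claim concrete, the sketch below shows how a dot product over {-1, +1} weights and activations reduces to XNOR plus popcount on packed bit vectors. It illustrates the general 1-bit inference idea only; it is not the authors' training procedure, which also propagates gradients through the quantizers.

```python
# With weights and activations in {-1, +1}, a dot product needs no
# multiplies: matching bits contribute +1, differing bits -1, so
#   dot = n - 2 * popcount(a XOR b).
# Illustrative sketch only; helper names are made up for this example.

def binarize(v):
    """Map real values to {-1, +1} via sign (0 maps to +1)."""
    return [1 if x >= 0 else -1 for x in v]

def pack_bits(v):
    """Pack a {-1, +1} vector into an int: bit i set iff v[i] == +1."""
    word = 0
    for i, x in enumerate(v):
        if x == 1:
            word |= 1 << i
    return word

def binary_dot(wa, wb, n):
    """Dot product of two packed {-1, +1} vectors of length n."""
    return n - 2 * bin(wa ^ wb).count("1")

a = binarize([0.7, -1.2, 0.1, -0.4])   # -> [+1, -1, +1, -1]
w = binarize([0.3,  0.9, -2.0, -0.1])  # -> [+1, +1, -1, -1]
assert binary_dot(pack_bits(a), pack_bits(w), 4) == \
       sum(x * y for x, y in zip(a, w))
print(binary_dot(pack_bits(a), pack_bits(w), 4))   # -> 0
```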




Journal:
  • Neurocomputing

Volume: 72  Issue: -

Pages: -

Publication date: 2009